Lecture 4: Derandomization
4.1 RP Error Reduction
Abstract
Consider an RP algorithm with constant error probability that uses r random bits. We will improve the error probability to 2^{-k} using only r + O(k) random bits. Compare this with (1) the brute-force approach of k independent trials, which would require O(kr) random bits to achieve the same error probability, and (2) the technique due to Karp, Pippenger and Sipser [KPS] (discussed in Lecture 1), which uses r random bits (i.e., no extra random bits) and achieves an error probability of 1/poly(r).

We will use random walks on expanders to reduce the error of RP algorithms. Ajtai, Komlos and Szemeredi first used random walks on expanders in the context of small-space derandomization [AKS]. The proof we present in lecture is due to Impagliazzo and Zuckerman [IZ].

The KPS technique, though excellent in terms of the number of extra random bits used, is limited by the fact that the running time of the improved algorithm is at least poly(1/δ), where δ is the new, reduced error of the algorithm. Hence it can reduce the error to at most 1/poly. The technique discussed today will reduce the error further, to 2^{-k}, at the cost of only O(k) extra random bits (as opposed to the O(kr) random bits of k independent trials), while the algorithm still runs in (randomized) polynomial time.

As in KPS, we will use a d-regular expander with vertex set V = {0,1}^r, so that |V| = 2^r, where d is a constant. As in KPS, we will assume that there exists an implicit construction of such expanders in the following sense: given any vertex v and any index i in the range 1, ..., d (where d is the degree of the expander), we can compute the i-th neighbor of v in time polynomial in |v| and |i|. The expander constructions we will discuss later in the course satisfy such strong properties.

Recall that to find witnesses, KPS began at a random vertex and completely explored all vertices within a ball of radius O(k). Here we also start at a random vertex, but instead of exploring all vertices in a ball, we walk randomly for k steps and run the original RP algorithm at every vertex along this random walk. Thus the total randomness used is at most r + k log d bits, since log d bits are required to choose a random neighbor at each step.
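To make the procedure concrete, here is a minimal sketch in Python. It is an illustration, not the lecture's pseudocode: the names rp_algorithm(x, coins) (the original one-sided-error RP algorithm, taking an r-bit coin string) and expander_neighbor(v, i) (the implicit neighbor function assumed above) are hypothetical placeholders supplied by the caller.

```python
import random

def amplified_rp(x, r, d, k, rp_algorithm, expander_neighbor):
    """Amplify a one-sided-error RP algorithm via a k-step random walk
    on an implicitly given d-regular expander with vertex set {0,1}^r.

    Randomness budget: r bits for the start vertex plus about log2(d)
    bits per step, i.e. r + O(k) bits total for constant degree d.
    """
    # Start vertex: a uniformly random r-bit string (costs r random bits).
    v = random.getrandbits(r)
    if rp_algorithm(x, v):  # one-sided error: an accepting run is always correct
        return True
    for _ in range(k):
        # One walk step: log2(d) random bits select a neighbor index.
        v = expander_neighbor(v, random.randrange(d))
        if rp_algorithm(x, v):
            return True
    # All k+1 runs rejected; by the lecture's analysis this is wrong
    # with probability exponentially small in k.
    return False
```

Here random.getrandbits and random.randrange merely stand in for the r + k log d coin flips the analysis charges; the point of the construction is that these are the only random bits consumed, in contrast with the kr bits needed for k independent trials.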